
    Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI

    While research on explainable AI (XAI) is booming and explanation techniques have proven promising in many application domains, standardised human-centred evaluation procedures are still missing. In addition, current evaluation procedures do not assess XAI methods holistically, in the sense that they do not treat explanations' effects on humans as a complex user experience. To tackle this challenge, we propose to adapt the User-Centric Evaluation Framework used in recommender systems: we integrate explanation aspects, summarise explanation properties, indicate relations between them, and categorise metrics that measure these properties. With this comprehensive evaluation framework, we hope to contribute to the human-centred standardisation of XAI evaluation.

    Comment: This preprint has not undergone any post-submission improvements or corrections. This work was an accepted contribution at the XAI World Conference 202